
Fix bug: rate_limit_details returned as nil, and a bug in Intercom request header handling #578

Open · wants to merge 2 commits into master

Conversation

@Goran1708

Fix a bug where rate_limit_details is returned as nil if the Net::HTTP request fails, and a bug where the app breaks if the Intercom response does not contain the X-RateLimit-Reset header.

Why?

Why are you making this change?

We experienced a "bug" when using this library: we implemented our own Sidekiq throttle limiter on top of the existing rate_limit_details. It worked fine until, at some point (my guess is), the Net::HTTP request started failing and rate_limit_details was returned as nil.

Another potential issue: if Intercom does not return the X-RateLimit-Reset header, the app will break because it tries to subtract a time from nil.
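
For context, a rough sketch of the kind of consumer-side throttle described above (the job class, the threshold, and the contacts call are hypothetical; only Intercom::Client#rate_limit_details comes from this gem):

    require 'intercom'
    require 'sidekiq'

    # Hypothetical Sidekiq job that throttles itself using the client's
    # rate limit bookkeeping.
    class IntercomSyncJob
      include Sidekiq::Worker

      def perform(contact_id)
        client = Intercom::Client.new(token: ENV['INTERCOM_TOKEN'])
        client.contacts.find(id: contact_id)

        details = client.rate_limit_details
        # When the underlying Net::HTTP call fails, `details` can come back
        # nil, and the hash access below raises NoMethodError -- which is
        # the failure mode this PR addresses.
        if details[:remaining].to_i < 10 && details[:reset_at]
          self.class.perform_at(details[:reset_at], contact_id)
        end
      end
    end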

How?

Technical details on your change

For rate_limit_details:

  1. Do not set rate_limit_details from request.rate_limit_details if it is nil, because it should already be an empty hash from the initialiser (a sketch of this guard follows this list).
  2. Another solution is to pass rate_limit_details into the Request object. It is a bit of an anti-pattern that both classes initialise the same variable and both of them use it. We should initialise it in Client, pass it to Request, let Request modify it, and send it back if needed.
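
A minimal sketch of option 1, mirroring the Client#execute_request hunk shown further down in this PR (guarding with .empty?, which the review below also discusses):

    def execute_request(request)
      request.handle_rate_limit = handle_rate_limit
      request.execute(@base_url, token: @token, api_version: @api_version, **timeouts)
    ensure
      # Only copy the request's bookkeeping back onto the client when the
      # request actually recorded something; a failed Net::HTTP call should
      # not blank out a previously good value.
      @rate_limit_details = request.rate_limit_details unless request.rate_limit_details.empty?
    end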

For the bug-prone date arithmetic:

  1. Default to the current time if reset_at is not returned (i.e. it is nil), so that the library does not throw an "undefined method `-' for nil" error (see the sketch after this list).
  2. We should not call sleep if the sleep amount is 0.
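
A minimal, self-contained sketch of both points together; the wrapper method and retry count here are placeholders, while the rescue body mirrors the hunks discussed below (in the gem itself this logic lives in Intercom::Request#execute):

    require 'intercom'

    def with_rate_limit_handling(rate_limit_details, retries: 3)
      yield
    rescue Intercom::RateLimitExceeded
      # Fall back to "now" when X-RateLimit-Reset was missing, so the
      # subtraction cannot blow up on nil.
      seconds_to_retry = ((rate_limit_details[:reset_at] || Time.now.utc) - Time.now.utc).ceil
      if (retries -= 1) < 0
        raise Intercom::RateLimitExceeded, 'Rate limit retries exceeded. Please examine current API Usage.'
      else
        # Skip the sleep entirely for zero or negative durations.
        sleep seconds_to_retry unless seconds_to_retry <= 0
        retry
      end
    end

In the gem the equivalent retry logic runs automatically when the client is created with handle_rate_limit enabled, as the request.handle_rate_limit assignment in the execute_request hunk below shows.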

…st, and bug if intercom request header does not contain X-RateLimit-Reset header
   if (retries -= 1) < 0
     raise Intercom::RateLimitExceeded, 'Rate limit retries exceeded. Please examine current API Usage.'
   else
-    sleep seconds_to_retry unless seconds_to_retry < 0
+    sleep seconds_to_retry unless seconds_to_retry <= 0
@samrjenkins · Dec 29, 2021

Perhaps not worth changing this behaviour in this PR to keep the changes in this branch as atomic as possible. All we need to do is protect against an ArgumentError in the case of negative arguments.
That being said, a more "Ruby" implementation might be

Suggested change
- sleep seconds_to_retry unless seconds_to_retry <= 0
+ sleep seconds_to_retry unless seconds_to_retry.negative?

@Goran1708 (Author)

.negative? covers only (-1, -2, -3, …); we want to include 0 as well, i.e. (0, -1, -2, -3, …). :o

@samrjenkins

My thinking was that the current <= implementation is just there to prevent an ArgumentError when we call sleep with a negative argument. Do we really want to protect against a 0 argument here?

My feeling is that changing this condition is not really relevant to the change we are looking to make in this PR, so it might be best to leave it unchanged.

@Goran1708 (Author)

Well, if possible I would optimise it a bit. I think sleeping something for 0 seconds is kind of off.

@samrjenkins · Jan 10, 2022

If you're keen to make this change then perhaps this?

Suggested change
- sleep seconds_to_retry unless seconds_to_retry <= 0
+ sleep seconds_to_retry if seconds_to_retry.positive?

@@ -72,11 +72,11 @@ def execute(target_base_url = nil, token:, read_timeout: 90, open_timeout: 30, a
     parsed_body
   rescue Intercom::RateLimitExceeded => e
     if @handle_rate_limit
-      seconds_to_retry = (@rate_limit_details[:reset_at] - Time.now.utc).ceil
+      seconds_to_retry = ((@rate_limit_details[:reset_at] || Time.now.utc) - Time.now.utc).ceil

@samrjenkins

If I understand correctly, this line is basically implementing a default seconds_to_retry value in the case that @rate_limit_details[:reset_at] is nil. Would it make more sense for this default value to be >0? If we are hitting a rate limit, it seems to me like the default behaviour should be to wait for a bit before a retry.

@Goran1708 (Author)

If there is a problem with the Intercom server, or something like that, and we are not getting reset_at, we don't want to keep sleeping the application, because we don't know whether we are actually hitting the rate limit. If we do hit the rate limit, the error is raised and we try to redo the call. If we hit it multiple times it will just "crash" and the end user will get a response error. But yeah, maybe I should convert Time.now.utc into 0; there is no need for that "expensive" solution.
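
For what it's worth, a short sketch of that cheaper default (this reflects the comment above, not what the current diff contains; the .positive? guard is the reviewer's earlier suggestion):

    # Default straight to 0 when reset_at is missing, instead of computing
    # Time.now.utc - Time.now.utc, and let the guard skip the no-op sleep.
    reset_at = @rate_limit_details[:reset_at]
    seconds_to_retry = reset_at ? (reset_at - Time.now.utc).ceil : 0
    sleep seconds_to_retry if seconds_to_retry.positive?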

@@ -156,7 +156,7 @@ def execute_request(request)
   request.handle_rate_limit = handle_rate_limit
   request.execute(@base_url, token: @token, api_version: @api_version, **timeouts)
 ensure
-  @rate_limit_details = request.rate_limit_details
+  @rate_limit_details = request.rate_limit_details unless request.rate_limit_details.empty?

@samrjenkins

Do you mean?

Suggested change
- @rate_limit_details = request.rate_limit_details unless request.rate_limit_details.empty?
+ @rate_limit_details = request.rate_limit_details unless request.rate_limit_details.nil?

I think this is your intention from what I understand of the PR description

@SeanHealy33 (Contributor)

Hey @Goran1708

Wondering if this PR is still in progress, or if you'd like us to give it a review.

@Goran1708 (Author)

Heya! Yeah sure, you can give it a review. Sorry I missed your notification.
